Manipulate tabletop objects with REEM
Description: This tutorial will show you how to use the manipulation action server with REEM.
Tutorial Level: ADVANCED
Launch REEM simulation with the necessary nodes
roslaunch reem_tabletop_grasping reem_simulation.launch
This brings up REEM in a world with a table and some objects on it. RViz will open with some pre-configured displays to help you understand what is going on. Depending on your machine this may take a while to load: around half a minute to a minute on a recent computer.
The manipulation server uses the pick and place interface of MoveIt! and takes care of selecting among the robot's different manipulation groups and of tracking the state of grasped objects.
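Once everything has loaded you can verify from Python that the action server is available. A minimal sketch, assuming the action definition is called ObjectManipulation in the reem_tabletop_grasping package (check the package's action folder for the exact name):

#!/usr/bin/env python
import rospy
import actionlib
# Assumed action name; see the reem_tabletop_grasping action definition
from reem_tabletop_grasping.msg import ObjectManipulationAction

rospy.init_node('check_manipulation_server')
client = actionlib.SimpleActionClient('/object_manipulation_server',
                                      ObjectManipulationAction)
# Block until the manipulation server shows up, or give up after 10 seconds
if client.wait_for_server(rospy.Duration(10.0)):
    rospy.loginfo('Manipulation server is up')
else:
    rospy.logwarn('Manipulation server not available')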
Send a pickup goal to the manipulation action server
You can either execute the example script:
rosrun reem_tabletop_grasping send_pick.py
Or send a goal directly to the action server by running:
rosrun actionlib axclient.py /object_manipulation_server
With a goal that looks like:
operation: 1
group: 'right_arm_torso'
target_pose:
  header:
    seq: 0
    stamp:
      secs: 0
      nsecs: 0
    frame_id: 'base_link'
  pose:
    position:
      x: 0.3
      y: -0.3
      z: 1.1
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0
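You can also build and send the same goal from Python, which is roughly what send_pick.py does. A sketch, assuming the goal type is ObjectManipulationGoal with the operation, group and target_pose fields shown above:

#!/usr/bin/env python
import rospy
import actionlib
# Assumed type names; check the reem_tabletop_grasping action definition
from reem_tabletop_grasping.msg import ObjectManipulationAction, ObjectManipulationGoal

rospy.init_node('send_pick_example')
client = actionlib.SimpleActionClient('/object_manipulation_server',
                                      ObjectManipulationAction)
client.wait_for_server()

goal = ObjectManipulationGoal()
goal.operation = 1                                # 1 = pick
goal.group = 'right_arm_torso'
goal.target_pose.header.frame_id = 'base_link'
goal.target_pose.pose.position.x = 0.3
goal.target_pose.pose.position.y = -0.3
goal.target_pose.pose.position.z = 1.1
goal.target_pose.pose.orientation.w = 1.0         # identity orientation

client.send_goal(goal)
client.wait_for_result()
print(client.get_result())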
The manipulation server will try to grasp the cluster on the table that is closest to the given pose. This makes it easy to use any kind of input pose (from object recognition algorithms, spoken positions, and so on).
Keep in mind that this is a work in progress: it is subject to change and may not succeed every time.
Send a place goal to the manipulation action server
You can either execute the example script:
rosrun reem_tabletop_grasping send_place.py
Or send a goal directly to the action server by running:
rosrun actionlib axclient.py /object_manipulation_server
With a goal that looks like:
operation: 2
group: 'right_arm_torso'
target_pose:
  header:
    seq: 0
    stamp:
      secs: 0
      nsecs: 0
    frame_id: 'base_link'
  pose:
    position:
      x: 0.3
      y: -0.3
      z: 1.1
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0
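Sending a place goal from Python is identical to the pick sketch above; only the operation code (and usually the target pose) changes:

# Continuing the pick sketch above: same client, new goal
goal = ObjectManipulationGoal()
goal.operation = 2                                # 2 = place
goal.group = 'right_arm_torso'
goal.target_pose.header.frame_id = 'base_link'
goal.target_pose.pose.position.x = 0.3
goal.target_pose.pose.position.y = -0.3
goal.target_pose.pose.position.z = 1.1
goal.target_pose.pose.orientation.w = 1.0
client.send_goal(goal)
client.wait_for_result()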
What is going on under the hood?
Once you send a goal to the manipulation server...
Clustering of the tabletop point cloud is triggered
Using object_recognition_clusters, the server segments the current point cloud, detecting planes as tables, and clusters the blobs found on top of them, assigning an ID to each one.
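The actual segmentation is done by object_recognition_clusters, but the idea can be illustrated in a few lines. A toy sketch (not the package's real implementation) that clusters the points left above a detected table plane, using scikit-learn's DBSCAN:

import numpy as np
from sklearn.cluster import DBSCAN

# points: an Nx3 array from the depth camera, in a frame where z points up
points = np.random.rand(1000, 3)              # stand-in for a real point cloud

# Assume plane segmentation already found the table at this height
table_height = 0.75
above_table = points[points[:, 2] > table_height + 0.01]

# Group nearby points into blobs; each label is a cluster id (-1 = noise)
labels = DBSCAN(eps=0.03, min_samples=20).fit_predict(above_table)
clusters = [above_table[labels == i] for i in set(labels) if i != -1]
print('Found %d clusters over the table' % len(clusters))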
The cluster to pick is chosen
Based on the pose given in the goal message, the server searches for the cluster closest to it to pick up.
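In other words, cluster selection is a nearest-centroid search. A sketch, assuming clusters is a list of Nx3 point arrays like the one built above:

import numpy as np

def closest_cluster(clusters, target_xyz):
    # Centroid of each cluster, then the index of the one nearest the target
    centroids = np.array([c.mean(axis=0) for c in clusters])
    distances = np.linalg.norm(centroids - np.asarray(target_xyz), axis=1)
    return int(np.argmin(distances))

# e.g. the position from the goal's target_pose
idx = closest_cluster(clusters, [0.3, -0.3, 1.1])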
A MoveIt! pickup goal is created and sent
The server fills in a MoveIt! pickup goal configured for the robot, which contains a list of possible grasps generated by moveit_simple_grasps. These grasps are being improved to offer a wider variety of positions, with a preference for more human-like final solutions.
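From Python, the equivalent step could look like the sketch below, which calls moveit_commander's pick() with a single hand-built moveit_msgs/Grasp (in the real server the grasp list comes from moveit_simple_grasps, and the object name comes from the clustering step; 'object0' here is hypothetical):

import sys
import rospy
import moveit_commander
from moveit_msgs.msg import Grasp

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('pickup_sketch')
group = moveit_commander.MoveGroupCommander('right_arm_torso')

grasp = Grasp()
grasp.grasp_pose.header.frame_id = 'base_link'
grasp.grasp_pose.pose.position.x = 0.3
grasp.grasp_pose.pose.position.y = -0.3
grasp.grasp_pose.pose.position.z = 1.1
grasp.grasp_pose.pose.orientation.w = 1.0
# Approach the object along x in base_link, between 10 and 20 cm
grasp.pre_grasp_approach.direction.header.frame_id = 'base_link'
grasp.pre_grasp_approach.direction.vector.x = 1.0
grasp.pre_grasp_approach.min_distance = 0.1
grasp.pre_grasp_approach.desired_distance = 0.2

# 'object0' must be a collision object already in the planning scene
group.pick('object0', [grasp])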
Monitoring of the goal status and the environment
A report of what is going on is published on the action server's feedback topic, and the server keeps track of picked objects (we can't place an object without an object in our hand, right?).
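To watch that feedback from code instead of axclient.py, register a feedback callback when sending the goal. Continuing the pick sketch from earlier:

def feedback_cb(feedback):
    # Each message reports which stage of the pick or place is running
    rospy.loginfo('Manipulation feedback: %s' % str(feedback))

# Same client and goal as in the pick sketch above
client.send_goal(goal, feedback_cb=feedback_cb)
client.wait_for_result()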